Transformer-based language models such as BERT have outperformed previous models on a large number of English benchmarks, but their evaluation is often limited to English or a small number of well-resourced languages. In this work, we evaluate a variety of monolingual, multilingual, and randomly initialized language models from the BERT family on Uralic languages, including Estonian, Finnish, Hungarian, Erzya, Moksha, Karelian, Livvi, Komi Permyak, Komi Zyrian, Northern Sámi, and Skolt Sámi. When monolingual models are available (currently only for et, fi, hu), they perform better on their native language, but in general they transfer worse than multilingual models or models of genetically unrelated languages that share the same character set. Remarkably, even without any dedicated effort at hyperparameter optimization, direct transfer from high-resource models yields what appear to be state-of-the-art POS and NER tools for the minority Uralic languages that lack sufficient fine-tuning data.
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by special-purpose algorithms designed for such problems. We offer an explanation for this phenomenon based on the concept of class-features variability collapse, which refers to the training dynamics of deep classification networks where the feature embeddings of samples belonging to the same class tend to concentrate around their class means. More specifically, we examine the few-shot error of the learned feature map, which is the classification error of the nearest class-center classifier using centers learned from a small number of random samples from each class. Assuming that the classes appearing in the data are selected independently from a distribution, we show that the few-shot error generalizes from the training data to unseen test data, and we provide an upper bound on the expected few-shot error for new classes (selected from the same distribution) using the average few-shot error for the source classes. Additionally, we show that the few-shot error on the training data can be upper bounded using the degree of class-features variability collapse. This suggests that foundation models can provide feature maps that are transferable to new downstream tasks even with limited data available.
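The nearest class-center classifier at the heart of the few-shot error can be sketched concretely. The following is a minimal illustration, not the paper's evaluation code: class centers are estimated from a few random support samples per class, and held-out query samples are assigned to the nearest center.

```python
import numpy as np

def few_shot_error(features, labels, n_shot, n_query, rng):
    """Nearest class-center classification error, with centers estimated
    from n_shot random samples per class and evaluated on n_query others."""
    classes = np.unique(labels)
    centers, queries, query_labels = [], [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support, query = idx[:n_shot], idx[n_shot:n_shot + n_query]
        centers.append(features[support].mean(axis=0))
        queries.append(features[query])
        query_labels.append(np.full(len(query), c))
    centers = np.stack(centers)                # (C, d) class means
    queries = np.concatenate(queries)          # (Q, d) held-out samples
    query_labels = np.concatenate(query_labels)
    # Assign each query to the nearest class center.
    dists = np.linalg.norm(queries[:, None, :] - centers[None, :, :], axis=-1)
    preds = classes[np.argmin(dists, axis=1)]
    return float(np.mean(preds != query_labels))
```

Under class-features variability collapse, within-class embeddings concentrate around their means, so even a few support samples estimate the centers well and this error stays small.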
We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own future latent representations. Despite its recent empirical success, such algorithms have an apparent defect: trivial representations (such as constants) minimize the prediction error, yet it is obviously undesirable to converge to such solutions. Our central insight is that careful designs of the optimization dynamics are critical to learning meaningful representations. We identify that a faster paced optimization of the predictor and semi-gradient updates on the representation, are crucial to preventing the representation collapse. Then in an idealized setup, we show self-predictive learning dynamics carries out spectral decomposition on the state transition matrix, effectively capturing information of the transition dynamics. Building on the theoretical insights, we propose bidirectional self-predictive learning, a novel self-predictive algorithm that learns two representations simultaneously. We examine the robustness of our theoretical insights with a number of small-scale experiments and showcase the promise of the novel representation learning algorithm with large-scale experiments.
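The two design choices highlighted above (a faster-paced predictor and semi-gradient updates on the representation) can be illustrated in a toy linear setting. This is a sketch under strong simplifying assumptions, not the paper's algorithm: a tabular representation `phi` predicts its own next-state features through a predictor `P`, the bootstrap target is held fixed (semi-gradient), and the predictor's learning rate is larger than the representation's.

```python
import numpy as np

def self_predictive_learning(T, d, k, steps, lr_phi=0.005, lr_p=0.05, seed=0):
    """Toy linear self-predictive dynamics on a d-state Markov chain
    with transition matrix T. phi (d x k) is the representation,
    P (k x k) the latent predictor."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=(d, k))
    P = np.eye(k)
    for _ in range(steps):
        s = rng.integers(d)
        s_next = rng.choice(d, p=T[s])       # sample a transition
        target = phi[s_next].copy()          # stop-gradient: target is fixed
        err = phi[s] @ P - target            # latent prediction error
        P -= lr_p * np.outer(phi[s], err)    # fast predictor update
        phi[s] -= lr_phi * err @ P.T         # slow semi-gradient update
    return phi, P
```

Removing either ingredient (e.g. backpropagating through the target, or updating `phi` as fast as `P`) is exactly the kind of change the analysis warns can drive the representation toward a trivial constant solution.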
Dyadic and small group collaboration is an evolutionary advantageous behaviour and the need for such collaboration is a regular occurrence in day to day life. In this paper we estimate the perceived personality traits of individuals in dyadic and small groups over thin-slices of interaction on four multimodal datasets. We find that our transformer based predictive model performs similarly to human annotators tasked with predicting the perceived big-five personality traits of participants. Using this model we analyse the estimated perceived personality traits of individuals performing tasks in small groups and dyads. Permutation analysis shows that in the case of small groups undergoing collaborative tasks, the perceived personality of group members clusters, this is also observed for dyads in a collaborative problem solving task, but not in dyads under non-collaborative task settings. Additionally, we find that the group level average perceived personality traits provide a better predictor of group performance than the group level average self-reported personality traits.
To apply federated learning to drug discovery we developed a novel platform in the context of European Innovative Medicines Initiative (IMI) project MELLODDY (grant n{\deg}831472), which was comprised of 10 pharmaceutical companies, academic research labs, large industrial companies and startups. The MELLODDY platform was the first industry-scale platform to enable the creation of a global federated model for drug discovery without sharing the confidential data sets of the individual partners. The federated model was trained on the platform by aggregating the gradients of all contributing partners in a cryptographic, secure way following each training iteration. The platform was deployed on an Amazon Web Services (AWS) multi-account architecture running Kubernetes clusters in private subnets. Organisationally, the roles of the different partners were codified as different rights and permissions on the platform and administrated in a decentralized way. The MELLODDY platform generated new scientific discoveries which are described in a companion paper.
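The key property of the aggregation step (individual gradients stay confidential while their sum is recovered exactly) can be sketched with a toy additive-masking scheme. This is a didactic stand-in, not the MELLODDY platform's actual cryptographic protocol: each pair of partners shares a random mask that one adds and the other subtracts, so each masked gradient reveals nothing in isolation while the masks cancel in the sum.

```python
import numpy as np

def secure_aggregate(gradients, seed=0):
    """Toy pairwise additive-masking aggregation of partner gradients.
    The returned sum equals sum(gradients) up to floating-point error,
    even though no single masked gradient equals its true value."""
    rng = np.random.default_rng(seed)
    n = len(gradients)
    masked = [g.astype(float).copy() for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=gradients[0].shape)
            masked[i] += mask   # partner i adds the shared mask
            masked[j] -= mask   # partner j subtracts it
    return sum(masked)          # masks cancel exactly in the sum
```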
Supervised machine learning provides state-of-the-art solutions to a wide range of computer vision problems. However, the need for large amounts of labeled training data limits the capabilities of these algorithms in scenarios where such data is scarce or expensive to obtain. Self-supervised learning offers a way to lower the need for manually annotated data by pretraining models on domain-specific unlabeled data. In this approach, labeled data is required only for fine-tuning models on downstream tasks. Medical image segmentation is a field where labeling data requires expert knowledge and collecting large labeled datasets is challenging; therefore, self-supervised learning algorithms promise substantial improvements in this field. Nevertheless, self-supervised learning algorithms are rarely used to pretrain medical image segmentation networks. In this paper, we elaborate on and analyze the effectiveness of supervised and self-supervised pretraining approaches for downstream medical image segmentation, focusing on convergence and data efficiency. We find that self-supervised pretraining on natural images and on target-domain-specific images leads to the fastest and most stable downstream convergence. In our experiments on the ACDC cardiac segmentation dataset, this pretraining approach achieves 4-5 times faster fine-tuning convergence compared to an ImageNet-pretrained model. We also show that this approach requires less than five epochs of pretraining on domain-specific data to achieve this improvement in downstream convergence time. Finally, we find that, in low-data scenarios, supervised ImageNet pretraining achieves the best accuracy, requiring less than 100 annotated samples to realize close-to-minimal error.
Partially observable Markov decision processes (POMDPs) are a framework applicable to many real-world problems. In this work, we propose a method to solve POMDPs with multimodal beliefs by relying on a policy that solves the fully observable version. By defining a new mixture value function based on the value function of the fully observable variant, we can use the corresponding greedy policy to solve the POMDP itself. We develop the mathematical framework required for this discussion and introduce a benchmark built on the task of Reconnaissance Blind TicTacToe. On this benchmark, we show that our policy outperforms policies that ignore the existence of multiple modes.
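The general shape of the idea, acting greedily on action values of the fully observable problem mixed under the current belief, resembles the classic QMDP heuristic. The sketch below shows that simpler QMDP-style mixture, not the paper's multimodal mixture value function: `Q` holds the fully observable action values and `belief` is a distribution over states.

```python
import numpy as np

def greedy_action_from_belief(belief, Q):
    """QMDP-style sketch: mix fully observable action values
    Q (|S| x |A|) under the belief, then act greedily."""
    q_b = belief @ Q            # expected value of each action under the belief
    return int(np.argmax(q_b))
```

A multimodal belief is exactly the case where such a plain expectation can mislead the agent (it averages across modes), which motivates a mixture value function that treats the modes explicitly.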
We propose a new graph neural network that we call AgentNet, designed specifically for graph-level tasks. AgentNet is inspired by sublinear algorithms and has computational complexity that is independent of the graph size. The architecture of AgentNet differs fundamentally from that of known graph neural networks: in AgentNet, a collection of trained neural agents intelligently walk the graph and then collectively decide on the output. We provide an extensive theoretical analysis of AgentNet: we show that the agents can learn to systematically explore their neighborhood, and that AgentNet can distinguish some structures that even 3-WL cannot. Moreover, AgentNet is able to separate any two graphs that are sufficiently different in terms of subgraphs. We confirm these theoretical results with synthetic experiments on hard-to-distinguish graphs and on real-world graph classification tasks. In both cases, we compare not only against standard GNNs but also against computationally more expensive GNN extensions.
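The walk-then-decide structure can be sketched in a few lines. This is a heavily simplified illustration, not AgentNet itself: the agents here pick neighbors uniformly at random rather than via a trained policy, and the collective decision is plain mean pooling, but it shows why the cost depends on the number of agents and walk length rather than on the graph size.

```python
import numpy as np

def agent_walk_readout(adj, node_feats, n_agents, walk_len, seed=0):
    """Agents walk the graph (uniform random neighbor choice stands in
    for a learned policy); the graph-level output is pooled from the
    features of the nodes they visit."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    positions = rng.integers(n, size=n_agents)   # random starting nodes
    visited = [node_feats[positions]]
    for _ in range(walk_len):
        new_pos = []
        for p in positions:
            neighbors = np.flatnonzero(adj[p])
            new_pos.append(rng.choice(neighbors) if len(neighbors) else p)
        positions = np.array(new_pos)
        visited.append(node_feats[positions])
    # Collective decision: mean-pool over all agents and all steps.
    return np.concatenate(visited).mean(axis=0)
```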
This paper contributes to strengthening the relationship between machine learning and the theory of differential equations. In this context, the inverse problem of fitting the parameters, and the initial condition, of a differential equation to some measurements constitutes a key issue. The paper explores an abstraction that can be used to construct a family of loss functions with the aim of fitting the solution of an initial value problem to a set of discrete or continuous measurements. It is shown that an extension of the adjoint equation can be used to derive the gradient of the loss function, as a continuous analogue of backpropagation in machine learning. Numerical evidence is provided showing that, under reasonably controlled conditions, the obtained gradients can be used in a gradient descent to fit the solution of an initial value problem to a set of continuous noisy measurements, as well as to a set of discrete noisy measurements that are recorded at uncertain times.
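The setting can be made concrete on the simplest initial value problem, x' = -θx, whose solution x(t) = x₀e^(-θt) is known in closed form. The sketch below fits θ to noisy discrete measurements by gradient descent; the analytic sensitivity ∂x/∂θ = -t·x stands in for the adjoint-derived gradient of the actual paper, which handles general IVPs where no closed form exists.

```python
import numpy as np

def fit_decay_rate(t, y, x0, theta0=0.1, lr=0.01, steps=2000):
    """Fit theta in x' = -theta * x to noisy measurements y at times t
    by gradient descent on the squared-error loss. The gradient uses the
    closed-form sensitivity of the solution (a stand-in for the adjoint)."""
    theta = theta0
    for _ in range(steps):
        x = x0 * np.exp(-theta * t)      # IVP solution at the sample times
        dx_dtheta = -t * x               # sensitivity of the solution to theta
        grad = np.sum(2.0 * (x - y) * dx_dtheta)
        theta -= lr * grad
    return theta
```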
We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the communication cost of any algorithm that performs optimally in a regret minimization setup. We then propose a distributed batch elimination version of the LinUCB algorithm, DisBE-LUCB, where the agents share information among each other through a central server. We prove that the communication cost of DisBE-LUCB matches our lower bound up to logarithmic factors. In particular, for scenarios with known context distribution, the communication cost of DisBE-LUCB is only $\tilde{\mathcal{O}}(dN)$ and its regret is ${\tilde{\mathcal{O}}}(\sqrt{dNT})$, which is of the same order as that incurred by an optimal single-agent algorithm for $NT$ rounds. We also provide similar bounds for practical settings where the context distribution can only be estimated. Therefore, our proposed algorithm is nearly minimax optimal in terms of \emph{both regret and communication cost}. Finally, we propose DecBE-LUCB, a fully decentralized version of DisBE-LUCB, which operates without a central server, where agents share information with their \emph{immediate neighbors} through a carefully designed consensus procedure.
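For reference, the single-agent base learner that DisBE-LUCB distributes is standard LinUCB: maintain ridge-regression statistics (A, b), and pick the arm maximizing the optimistic estimate θ̂ᵀx + α·√(xᵀA⁻¹x). The sketch below shows that base learner only, not the distributed batch-elimination protocol or the consensus procedure of DecBE-LUCB.

```python
import numpy as np

def linucb_choose(A, b, contexts, alpha=1.0):
    """Pick the arm with the largest upper confidence bound on reward.
    contexts is an (n_arms, d) array of feature vectors."""
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b                       # ridge-regression estimate
    ucb = contexts @ theta_hat + alpha * np.sqrt(
        np.einsum('ij,jk,ik->i', contexts, A_inv, contexts))
    return int(np.argmax(ucb))

def linucb_update(A, b, x, reward):
    """Rank-one update of the statistics after observing a reward."""
    A += np.outer(x, x)
    b += reward * x
    return A, b
```

In the distributed setting, the communication question is how cheaply the agents can keep (approximations of) these shared statistics in sync, which is what the Ω(dN) lower bound and the matching Õ(dN) cost of DisBE-LUCB quantify.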